398 research outputs found

    Majorization algorithms for inspecting circles, ellipses, squares, rectangles, and rhombi

    In several disciplines, as diverse as shape analysis, location theory, quality control, archaeology, and psychometrics, it can be of interest to fit a circle through a set of points. We use the result that it suffices to locate a center for which the variance of the distances from the center to a set of given points is minimal. In this paper, we propose a new algorithm based on iterative majorization to locate the center. This algorithm is guaranteed to yield a series of nonincreasing variances until a stationary point is obtained. In all practical cases, the stationary point turns out to be a local minimum. Numerical experiments show that the majorizing algorithm is stable and fast. In addition, we extend the method to fit other shapes, such as a square, an ellipse, a rectangle, and a rhombus, by making use of the class of l_p distances and dimension weighting. We also allow for rotations for shapes that might be rotated in the plane. We illustrate how this extended algorithm can be used as a tool for shape recognition.
    Keywords: iterative majorization; location; optimization; shape analysis
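
    The stationary condition behind this variance-of-distances objective can be written down directly: setting its gradient to zero gives center = centroid - (mean radius) x (mean unit vector from the center to the points). A minimal fixed-point sketch of that condition (a simple iteration consistent with the objective, not the paper's exact majorization update):

```python
import math

def fit_circle(points, iters=200):
    """Fit a circle by driving the variance of center-to-point distances down.

    Fixed-point sketch of the stationary condition
    center = centroid - mean_radius * mean_unit_vector.
    Assumes no point coincides exactly with the current center.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n  # centroid coordinates,
    my = sum(p[1] for p in points) / n  # also used as the starting center
    cx, cy = mx, my
    for _ in range(iters):
        # Distances from the current center to every point.
        ds = [math.hypot(px - cx, py - cy) for px, py in points]
        dbar = sum(ds) / n  # mean radius
        # Mean of the unit vectors pointing from the center to the points.
        ux = sum((px - cx) / d for (px, _), d in zip(points, ds)) / n
        uy = sum((py - cy) / d for (_, py), d in zip(points, ds)) / n
        # Stationary condition of the distance-variance objective.
        cx, cy = mx - dbar * ux, my - dbar * uy
    radius = sum(math.hypot(px - cx, py - cy) for px, py in points) / n
    return (cx, cy), radius
```

    On noise-free points the iteration recovers the generating circle; with noisy data it settles on a center of minimal distance variance, matching the local-minimum behavior described above.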

    VIPSCAL: A combined vector ideal point model for preference data

    In this paper, we propose a new model that combines the vector model and the ideal point model of unfolding. An algorithm is developed, called VIPSCAL, that minimizes the combined loss for both ordinal and interval transformations. As such, mixed representations including both vectors and ideal points can be obtained, but the algorithm also allows for the unmixed cases, giving either a complete ideal point analysis or a complete vector analysis. On the basis of previous research, the mixed representations were expected to be nondegenerate. However, degenerate solutions still occurred, as the common belief that distant ideal points can be represented by vectors does not hold true. The occurrence of these distant ideal points was resolved by adding certain length and orthogonality restrictions on the configuration. The restrictions can be used for both the mixed and unmixed cases in several ways, such that a number of different models can be fitted by VIPSCAL.
    Keywords: unfolding; ideal point model; vector model
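
    The two models being combined make opposite predictions in simple cases: the vector model scores an item by its projection onto a subject's preference vector (more extreme is always better), while the ideal point model scores it by its closeness to the subject's ideal point. A toy contrast of the two rules (illustrative coordinates only, not VIPSCAL itself):

```python
import math

def vector_pref(subject_vec, item):
    # Vector model: preference is the projection on the subject's vector.
    return subject_vec[0] * item[0] + subject_vec[1] * item[1]

def ideal_point_pref(ideal, item):
    # Ideal point model: preference falls with distance from the ideal point.
    return -math.dist(ideal, item)

# One subject; direction and ideal point both at (1, 0); two items on that axis.
near, far = (1.0, 0.0), (3.0, 0.0)
# Vector model: the more extreme item always wins.
assert vector_pref((1.0, 0.0), far) > vector_pref((1.0, 0.0), near)
# Ideal point model: the item at the ideal wins; "more" is not always better.
assert ideal_point_pref((1.0, 0.0), near) > ideal_point_pref((1.0, 0.0), far)
```

    The degeneracy discussed above arises in the opposite direction: an ideal point pushed far outside the item configuration behaves almost like a vector, which motivates the length restrictions.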

    Logistic regression with sparse common and distinctive covariates

    Having large sets of predictor variables from multiple sources concerning the same individuals is becoming increasingly common in behavioral research. On top of the variable selection problem, predicting a categorical outcome from such data raises the additional challenge of identifying the processes at play underneath the predictors. These processes are of particular interest in the setting of multi-source data because they can be associated either individually with a single data source or jointly with multiple sources. Although many methods have addressed the classification problem in high dimensionality, the additional challenge of distinguishing such underlying predictor processes in multi-source data has not received sufficient attention. To this end, we propose the method of Sparse Common and Distinctive Covariates Logistic Regression (SCD-Cov-logR). The method is a multi-source extension of principal covariates regression, combined with the generalized linear modeling framework to allow classification of a categorical outcome. In a simulation study, SCD-Cov-logR outperformed related methods commonly used in the behavioral sciences. We also demonstrate the practical usage of the method on an empirical dataset
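
    The common/distinctive distinction can be pictured through the sparsity pattern of the component-weight matrix over the concatenated sources: a common component carries nonzero weights in the variable blocks of both sources, while a distinctive component is nonzero in one source's block only. A schematic illustration of that structure (hypothetical weights, not output of SCD-Cov-logR):

```python
# Two sources: source 1 contributes variables 0-2, source 2 variables 3-5.
SOURCE1, SOURCE2 = range(0, 3), range(3, 6)

# Each entry is one component's weight vector over all six variables.
W = {
    "common":       [0.5, -0.4, 0.3, 0.6, -0.2, 0.4],  # active in both blocks
    "distinctive1": [0.7,  0.5, -0.6, 0.0, 0.0, 0.0],  # source 1 only
    "distinctive2": [0.0,  0.0, 0.0, 0.5, 0.7, -0.4],  # source 2 only
}

def active_blocks(weights):
    # A source's block is "active" if any of its weights is nonzero.
    return (any(weights[j] != 0 for j in SOURCE1),
            any(weights[j] != 0 for j in SOURCE2))

assert active_blocks(W["common"]) == (True, True)
assert active_blocks(W["distinctive1"]) == (True, False)
assert active_blocks(W["distinctive2"]) == (False, True)
```

    In the actual method, sparsity within and across these blocks is estimated from the data rather than imposed by hand; the point here is only what "common" versus "distinctive" means structurally.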

    Bayesian multilevel latent class models for the multiple imputation of nested categorical data

    With this article, we propose using a Bayesian multilevel latent class (BMLC; or mixture) model for the multiple imputation of nested categorical data. Unlike recently developed methods that can only pick up associations between pairs of variables, the multilevel mixture model we propose is flexible enough to automatically deal with complex interactions in the joint distribution of the variables to be estimated. After formally introducing the model and showing how it can be implemented, we carry out a simulation study and a real-data study in order to assess its performance and compare it with the commonly used listwise deletion and an available R-routine. Results indicate that the BMLC model is able to recover unbiased parameter estimates of the analysis models considered in our studies, as well as to correctly reflect the uncertainty due to missing data, outperforming the competing methods
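
    The imputation logic of a latent class model can be sketched on a toy example: given class weights and class-conditional probabilities for categorical items, a record's observed entries determine a posterior over classes, from which a missing entry is imputed. A minimal single-level sketch with made-up parameters (not the BMLC model itself, which adds the multilevel structure and a full Bayesian treatment):

```python
# Toy 2-class model for two binary items; all numbers are illustrative.
weights = [0.5, 0.5]                       # P(class)
cond = [                                   # cond[k][item][value] = P(item = value | class k)
    [[0.9, 0.1], [0.8, 0.2]],              # class 0 favors value 0 on both items
    [[0.1, 0.9], [0.2, 0.8]],              # class 1 favors value 1 on both items
]

def posterior(observed):
    # observed: {item_index: value}; returns P(class | observed) by Bayes' rule.
    joint = []
    for k, w in enumerate(weights):
        p = w
        for item, value in observed.items():
            p *= cond[k][item][value]
        joint.append(p)
    total = sum(joint)
    return [p / total for p in joint]

def impute(observed, missing_item):
    # Most probable value under the posterior-mixed class conditionals.
    post = posterior(observed)
    pv = [sum(post[k] * cond[k][missing_item][v] for k in range(2)) for v in (0, 1)]
    return max((0, 1), key=lambda v: pv[v])

# A record observed as item0 = 0 is most likely in class 0, so item1 is imputed as 0.
assert impute({0: 0}, missing_item=1) == 0
assert impute({0: 1}, missing_item=1) == 1
```

    Proper multiple imputation draws the missing value (and the model parameters) at random rather than taking the mode; the mode is used here only to keep the sketch deterministic.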

    Er3+-to-Yb3+ and Pr3+-to-Yb3+ energy transfer for highly efficient near-infrared cryogenic optical temperature sensing

    Here, the very high thermal sensing capability of Er3+,Yb3+-doped LaF3 nanoparticles, where Er3+-to-Yb3+ energy transfer is used, is reported. Pr3+,Yb3+-doped LaF3 nanoparticles, with Pr3+-to-Yb3+ energy transfer, also showed temperature sensing in the same temperature regime, but with lower performance. The investigated Er3+,Yb3+-doped LaF3 nanoparticles show a remarkably high relative sensitivity S_r of up to 6.6092% K^-1 (at 15 K) in the near-infrared (NIR) region over the cryogenic (15-105 K) temperature range, opening up a whole new thermometric system suitable for advanced applications in very low temperature ranges. To date, reports on NIR cryogenic sensors have been very scarce
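
    For context, in ratiometric luminescence thermometry the relative sensitivity is defined as S_r = (1/Delta)|dDelta/dT| x 100% K^-1. In the textbook case of two Boltzmann-coupled levels, Delta(T) = C exp(-dE/(k_B T)), this reduces to S_r = dE/(k_B T^2) x 100, which grows as temperature drops, consistent with the high cryogenic sensitivities reported here. A sketch of that standard case (generic constants, not this paper's Er3+/Yb3+ system):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def delta(T, C, dE):
    # Intensity ratio of two thermally coupled levels (Boltzmann law).
    return C * math.exp(-dE / (K_B * T))

def s_r(T, dE):
    # Relative sensitivity in % K^-1: (1/Delta)|dDelta/dT| * 100 = dE/(k_B T^2) * 100.
    return dE / (K_B * T**2) * 100.0

# Cross-check the closed form against a numerical derivative of Delta(T).
T, C, dE = 100.0, 2.0, 0.05  # illustrative: 100 K, 50 meV level gap
h = 1e-4
num = abs(delta(T + h, C, dE) - delta(T - h, C, dE)) / (2 * h) / delta(T, C, dE) * 100.0
assert abs(num - s_r(T, dE)) < 1e-3
# Sensitivity rises as temperature drops.
assert s_r(50.0, dE) > s_r(150.0, dE)
```

    Energy-transfer-based ratios such as the Er3+-to-Yb3+ scheme above need not follow this simple Boltzmann form, which is part of why their low-temperature sensitivities can be so high.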

    TeSen : tool for determining thermometric parameters in ratiometric optical thermometry

    This work presents the method and a numerical program, along with a graphical user interface (GUI), for calculating the standard parameters necessary to evaluate luminescence ratiometric thermometers: the thermometric parameter Delta, the absolute sensitivity S_a, and the relative sensitivity S_r. Despite the high interest in temperature sensing materials, to the best of our knowledge no such tool has been reported to date. This is currently done by researchers using trial and error, a rather laborious task with a high risk of errors. The undoubted benefit of employing an optimization technique lies in the very fast and precise determination of the parameters under different models. The thermometric parameters Delta, S_a, and S_r are calculated from the luminescence emission spectra measured over a certain temperature range. Using the TeSen tool, the thermometric parameter Delta can be calculated from both the peak maxima and the integrated areas under the peaks. The tool also allows testing the ratio of multiple peaks, different peak ranges, and different temperature ranges in a very convenient way. In this work, the TeSen tool was used to study several new sensor materials, presenting new cases of single- and dual-center luminescent ratiometric thermometers
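
    The quantities the tool computes can be sketched from first principles: Delta(T) is the ratio of two peak intensities (peak maxima or integrated areas), S_a = |dDelta/dT|, and S_r = S_a/Delta x 100% K^-1, with the derivative taken numerically over the measured temperature grid. A minimal sketch of that pipeline on synthetic data (not the TeSen implementation):

```python
def band_area(wavelengths, intensities, lo, hi):
    # Integrated area under a peak between lo and hi (trapezoidal rule).
    area = 0.0
    for i in range(len(wavelengths) - 1):
        if lo <= wavelengths[i] and wavelengths[i + 1] <= hi:
            area += 0.5 * (intensities[i] + intensities[i + 1]) * \
                    (wavelengths[i + 1] - wavelengths[i])
    return area

def thermometric_parameters(temps, deltas):
    # Central-difference S_a = |dDelta/dT| and S_r = S_a / Delta * 100 (% K^-1)
    # on the interior points of the temperature grid.
    s_a, s_r = [], []
    for i in range(1, len(temps) - 1):
        d = abs(deltas[i + 1] - deltas[i - 1]) / (temps[i + 1] - temps[i - 1])
        s_a.append(d)
        s_r.append(d / deltas[i] * 100.0)
    return s_a, s_r

# A synthetic single peak: area = 8 by the trapezoidal rule.
wl = [500, 501, 502, 503, 504]
inten = [0.0, 2.0, 4.0, 2.0, 0.0]
assert band_area(wl, inten, 500, 504) == 8.0

# Synthetic ratios Delta(T) = 0.01 * T over a 50-150 K grid (illustrative only).
temps = [50.0, 75.0, 100.0, 125.0, 150.0]
deltas = [0.01 * t for t in temps]
s_a, s_r = thermometric_parameters(temps, deltas)
assert all(abs(a - 0.01) < 1e-9 for a in s_a)   # dDelta/dT = 0.01 everywhere
assert abs(s_r[1] - 1.0) < 1e-6                 # at 100 K: 0.01 / 1.0 * 100 = 1 % K^-1
```

    Testing different peak ranges or temperature windows, as the tool allows, amounts to changing the (lo, hi) bounds and the slice of the temperature grid fed into these two functions.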

    Bayesian latent class models for the multiple imputation of categorical data

    Latent class analysis has recently been proposed for the multiple imputation (MI) of missing categorical data, using either a standard frequentist approach or a nonparametric Bayesian model called the Dirichlet process mixture of multinomial distributions (DPMM). The main advantage of using a latent class model for multiple imputation is its flexibility: it can capture complex relationships in the data, provided the number of latent classes is large enough. However, the two existing approaches also have certain disadvantages. The frequentist approach is computationally demanding because it requires estimating many LC models: first, models with different numbers of classes must be estimated to determine the required number of classes, and subsequently the selected model is reestimated for multiple bootstrap samples to take parameter uncertainty into account during the imputation stage. Whereas the Bayesian Dirichlet process models perform the model selection and the handling of parameter uncertainty automatically, the disadvantage of this method is that it tends to use too few clusters during Gibbs sampling, leading to an underfitting model that yields invalid imputations. In this paper, we propose an alternative approach which combines the strengths of the two existing approaches: we use the standard Bayesian latent class model as an imputation model. We show how model selection can be performed prior to the imputation step using a single run of the Gibbs sampler and, moreover, show how underfitting is prevented by using large values for the hyperparameters of the mixture weights. The results of two simulation studies and one real-data study indicate that, with a proper setting of the prior distributions, the Bayesian latent class model yields valid imputations and outperforms competing methods
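
    The role of the large hyperparameter can be made concrete: under a symmetric Dirichlet(alpha) prior on K mixture weights, each weight has mean 1/K and variance (1/K)(1 - 1/K)/(K*alpha + 1), so a large alpha concentrates the prior near equal weights and discourages the sampler from emptying classes, which is the underfitting guarded against above. A quick numerical check of this standard Dirichlet property (generic values, not the paper's settings):

```python
def dirichlet_weight_variance(K, alpha):
    # Variance of one weight under a symmetric Dirichlet(alpha) prior on K classes.
    # The marginal of a single weight is Beta(alpha, (K - 1) * alpha).
    m = 1.0 / K
    return m * (1.0 - m) / (K * alpha + 1.0)

K = 10
small = dirichlet_weight_variance(K, 0.1)   # weights can swing far from 1/K
large = dirichlet_weight_variance(K, 50.0)  # weights pinned near 1/K

# With alpha = 0.1 classes can easily empty out during Gibbs sampling;
# with alpha = 50 all K classes stay in play.
assert large < small / 100
```

    This is the prior-only picture; in the actual sampler the data still reshape the weights, but a large alpha keeps every class available for the imputation step.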